Although current deep learning-based methods have achieved promising performance in the blind single image super-resolution (SISR) task, most of them mainly focus on heuristically constructing diverse network architectures and put less emphasis on explicitly embedding the physical generation mechanism between blur kernels and high-resolution (HR) images. To alleviate this issue, we propose a model-driven deep neural network, called KXNet, for blind SISR. Specifically, to solve the classical SISR model, we propose a simple yet effective iterative algorithm. Then, by unfolding the involved iterative steps into corresponding network modules, we naturally construct KXNet. The main specificity of the proposed KXNet is that the entire learning process is fully and explicitly integrated with the inherent physical mechanism underlying this SISR task. Thus, the learned blur kernel has a clear physical pattern, and the mutually iterative process between the blur kernel and the HR image can soundly guide KXNet to evolve in the right direction. Extensive experiments on synthetic and real data demonstrate the superior accuracy and generality of our method beyond current representative state-of-the-art blind SISR methods. Code is available at: \url{https://github.com/jiahong-fu/kxnet}.
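The classical SISR model that the unrolled algorithm solves assumes the low-resolution image is generated by blurring the HR image with a kernel and then downsampling. A minimal numpy sketch of this degradation (noise term omitted; the kernel and stride below are illustrative, not the paper's settings):

```python
import numpy as np

def degrade(x, k, s):
    """Classical SISR degradation: blur the HR image x with kernel k,
    then downsample by stride s (the additive noise term is omitted)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)), mode="edge")
    h, w = x.shape
    y = np.zeros_like(x)
    for i in range(h):
        for j in range(w):
            # correlate the kernel with the padded neighborhood
            y[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return y[::s, ::s]

# A flat image blurred with a normalized kernel stays flat.
x = np.ones((8, 8))
k = np.ones((3, 3)) / 9.0
y = degrade(x, k, 2)
```

Blind SISR then amounts to jointly recovering both `x` and `k` from `y`, which is why the mutual iteration between the two is central to the method.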
Online knowledge distillation (OKD) improves the involved models by reciprocally exploiting the difference between teacher and student. Several crucial bottlenecks concerning the gap between them have received limited formal study: for example, why and when does a large gap harm performance, especially for the student? How can the gap between teacher and student be quantified? In this paper, we propose Switchable Online Knowledge Distillation (SwitOKD) to answer these questions. Instead of focusing on the accuracy gap at the test phase, the core idea of SwitOKD is to adaptively calibrate the gap at the training phase, namely the distillation gap, via a switching strategy between two modes: expert mode (pause the teacher while keeping the student learning) and learning mode (restart the teacher). To maintain an appropriate distillation gap, we further devise an adaptive switching threshold, which provides a formal criterion for when to switch to learning mode or expert mode, and thereby improves the student's performance. Meanwhile, the teacher benefits from our adaptive switching threshold and remains basically on par with other online arts. We further extend SwitOKD to multiple networks with two basis topologies. Finally, extensive experiments and analysis validate the merits of SwitOKD for classification over the state-of-the-art. Our code is available at https://github.com/hfutqian/switokd.
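The mode-switching idea can be sketched as a simple rule on a measured distillation gap. The L1 distance between output distributions and the fixed threshold below are illustrative choices, not the paper's exact adaptive formulation:

```python
def switch_mode(p_teacher, p_student, threshold):
    """Pick expert or learning mode from the distillation gap, measured
    here as the L1 distance between the two output distributions (an
    illustrative stand-in for the paper's adaptive criterion)."""
    gap = sum(abs(t - s) for t, s in zip(p_teacher, p_student))
    # Large gap: pause the teacher so the student can catch up.
    return "expert" if gap > threshold else "learning"
```

In training, this check would run per step, freezing the teacher's update in expert mode and resuming it in learning mode.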
This paper addresses policy learning in non-stationary environments and games with continuous actions. Instead of the classical reward-maximization mechanism, we propose a no-regret-style reinforcement learning algorithm, PORL, inspired by the follow-the-regularized-leader (FTRL) and mirror descent (MD) updates. We prove that PORL has a last-iterate convergence guarantee, which is important for adversarial and cooperative games. Empirical studies show that, in static environments of control tasks, PORL performs equally well as, or even better than, the soft actor-critic (SAC) algorithm; in non-stationary environments, including dynamic environments, adversarial training, and competitive games, PORL outperforms SAC in both final policy performance and training stability.
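The mirror descent update underlying this family of methods, with an entropic regularizer over a simplex policy, reduces to multiplicative weights followed by renormalization. A minimal sketch on a two-action policy (the payoff vector and step size are illustrative):

```python
import math

def md_step(policy, payoffs, eta):
    """One entropic mirror-descent update on a simplex policy:
    exponentiated payoffs (multiplicative weights), then renormalize."""
    w = [p * math.exp(eta * g) for p, g in zip(policy, payoffs)]
    z = sum(w)
    return [v / z for v in w]

# Repeated updates against a fixed payoff concentrate on the best action.
policy = [0.5, 0.5]
for _ in range(50):
    policy = md_step(policy, [1.0, 0.0], 0.5)
```

In a game setting the payoffs change with the opponent's policy, which is where last-iterate (rather than average-iterate) convergence becomes the relevant guarantee.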
Recent work has shown that large pretrained language models (LMs) not only perform remarkably well on a range of natural language processing (NLP) tasks, but also begin to improve on reasoning tasks such as arithmetic induction, symbolic manipulation, and commonsense reasoning as model scale increases. However, it is still unclear what the underlying capabilities of these LMs are. Surprisingly, we find that these models have limitations on certain basic symbolic manipulation tasks such as copy, reverse, and addition. When the total number of symbols or repeated symbols increases, model performance drops quickly. We investigate the potential causes behind this phenomenon and examine a set of possible methods, including explicit positional markers, fine-grained computation steps, and LMs with callable programs. Experimental results show that none of these techniques can fully solve even the simplest addition induction problem. In the end, we introduce LMs with tutor, which demonstrates every single step of teaching. LMs with tutor are able to deliver 100% accuracy in situations of OOD and repeated symbols, yielding new insights into the boundary of large LMs in induction.
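To make "fine-grained computation steps" concrete: for multi-digit addition, such a trace spells out each column sum with its carry, the kind of step-by-step demonstration a tutor would provide. A minimal sketch of generating that trace:

```python
def addition_steps(a, b):
    """Spell out multi-digit addition column by column with explicit
    carries, producing both the result and the step trace."""
    da, db = str(a)[::-1], str(b)[::-1]   # least-significant digit first
    carry, digits, steps = 0, [], []
    for i in range(max(len(da), len(db))):
        x = int(da[i]) if i < len(da) else 0
        y = int(db[i]) if i < len(db) else 0
        s = x + y + carry
        steps.append(f"{x}+{y}+{carry}={s}")
        digits.append(str(s % 10))
        carry = s // 10
    if carry:
        digits.append(str(carry))
    return int("".join(reversed(digits))), steps

total, steps = addition_steps(57, 68)
```

The paper's finding is that providing traces like this helps but does not by itself solve the induction problem; only demonstrating every step at teaching time does.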
$k$-means clustering is a fundamental problem across disciplines. This problem is nonconvex, and standard algorithms are only guaranteed to find a local optimum. Leveraging the structure of local solutions characterized in [1], we propose a general algorithmic framework for escaping undesirable local solutions and recovering the global solution (or the ground truth). This framework iterates between two steps: (i) detecting mis-specified clusters in a local solution and (ii) improving the current local solution via non-local operations. We discuss implementations of these steps, and elucidate how the proposed framework unifies variants of $k$-means algorithms in the literature from a geometric perspective. In addition, we introduce two natural extensions of the proposed framework, where the initial number of clusters is mis-specified. We provide theoretical justification for our approach, which is corroborated by extensive experiments.
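The detect-and-repair idea can be illustrated on 1-D data: Lloyd iterations get stuck in a local solution where one center covers two true clusters; a non-local move then merges the two closest centers and splits the most spread-out cluster. This is a sketch of the general idea under simplifying assumptions, not the paper's exact detection rule or operations:

```python
def lloyd(points, centers, iters=20):
    """Standard 1-D Lloyd iterations; they only find a local optimum."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            j = min(range(len(centers)), key=lambda c: (p - centers[c]) ** 2)
            clusters[j].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

def escape_step(points, centers):
    """One illustrative escape move: detect the most spread-out cluster
    (likely covering several true clusters), then repair non-locally by
    merging the two closest centers and splitting that cluster."""
    clusters = [[] for _ in centers]
    for p in points:
        j = min(range(len(centers)), key=lambda c: (p - centers[c]) ** 2)
        clusters[j].append(p)
    worst = max(range(len(clusters)),
                key=lambda i: max(clusters[i]) - min(clusters[i])
                if clusters[i] else 0.0)
    _, i, j = min((abs(centers[a] - centers[b]), a, b)
                  for a in range(len(centers))
                  for b in range(a + 1, len(centers)))
    kept = [c for idx, c in enumerate(centers) if idx not in (i, j, worst)]
    return kept + [(centers[i] + centers[j]) / 2,
                   min(clusters[worst]), max(clusters[worst])]

# Three well-separated clusters; a bad init traps Lloyd in a local solution.
points = [0.0, 1.0, 10.0, 11.0, 20.0, 21.0]
stuck = lloyd(points, [0.4, 0.6, 15.0])
fixed = lloyd(points, escape_step(points, stuck))
```

After the escape step, re-running Lloyd recovers the three true cluster centers.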
Extremely large pre-trained language models (PTMs) such as GPT-3 are usually released as a service, allowing users to design task-specific prompts to query the PTMs through black-box APIs. In such a scenario, which we call Language-Model-as-a-Service (LMaaS), the gradients of PTMs are usually unavailable. Can we optimize the task prompts by only accessing the model inference APIs? Based on recent observations that large PTMs have a very low intrinsic dimensionality, this work proposes black-box tuning, which optimizes PTMs through derivative-free algorithms. In particular, we invoke CMA-ES to optimize the continuous prompt prepended to the input text by iteratively calling PTM inference APIs. Our experimental results show that black-box tuning with RoBERTa on a few labeled samples not only significantly outperforms manual prompts and GPT-3's in-context learning, but also surpasses the gradient-based counterparts, namely prompt tuning and full model tuning.
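The derivative-free loop can be sketched with a (1+1) evolution strategy, a much simpler stand-in for CMA-ES: propose a perturbation of the low-dimensional prompt vector, query the black-box loss (one inference-API call per query), and keep the candidate only if it improves. The toy quadratic loss and all parameters below are illustrative:

```python
import random

def one_plus_one_es(loss, dim, iters=200, sigma=0.5, seed=0):
    """Derivative-free optimization of a low-dimensional prompt vector
    with a (1+1) evolution strategy; `loss` plays the role of one
    black-box inference-API call."""
    rng = random.Random(seed)
    z = [0.0] * dim
    best = loss(z)
    for _ in range(iters):
        cand = [zi + rng.gauss(0, sigma) for zi in z]
        f = loss(cand)
        if f < best:          # keep the candidate only if it improves
            z, best = cand, f
        sigma *= 0.99         # slowly shrink the search radius
    return z, best

# Toy "API": the loss is minimized at z = (1, -2), unknown to the optimizer.
target = [1.0, -2.0]
loss = lambda z: sum((a - b) ** 2 for a, b in zip(z, target))
z, best = one_plus_one_es(loss, dim=2)
```

CMA-ES additionally adapts a full covariance matrix for the proposal distribution, which is what makes it effective in the few-hundred-dimensional subspaces the paper optimizes over.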
Sound event detection (SED) has gained increasing attention for its wide applications in surveillance, video indexing, etc. Existing SED models mainly generate frame-level predictions, converting the task into a sequential multi-label classification problem. A critical issue with frame-based models is that they pursue the best frame-level prediction rather than the best event-level prediction. Besides, they require post-processing and cannot be trained in an end-to-end way. This paper first presents the 1D Detection Transformer (1D-DETR), inspired by the Detection Transformer for image object detection. Furthermore, given the characteristics of SED, an audio query branch and a one-to-many matching strategy for fine-tuning are added to 1D-DETR to form the Sound Event Detection Transformer (SEDT). To our knowledge, SEDT is the first event-based and end-to-end SED model. Experiments are conducted on the URBAN-SED dataset and the DCASE2019 Task 4 dataset, and both show that SEDT achieves competitive performance.
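To see the post-processing that frame-based models require (and that an event-based model like SEDT avoids), consider turning per-frame probabilities into events: threshold each frame and merge consecutive active frames into (onset, offset) spans. A minimal sketch, returning frame indices (multiply by the hop length to get seconds):

```python
def frames_to_events(frame_probs, threshold=0.5):
    """Threshold per-frame probabilities and merge runs of active frames
    into (onset_frame, offset_frame) events -- the post-processing step
    that frame-based SED pipelines depend on."""
    events, start = [], None
    for i, p in enumerate(frame_probs):
        if p >= threshold and start is None:
            start = i                      # event onset
        elif p < threshold and start is not None:
            events.append((start, i))      # event offset (exclusive)
            start = None
    if start is not None:                  # event still open at the end
        events.append((start, len(frame_probs)))
    return events

events = frames_to_events([0.1, 0.8, 0.9, 0.2, 0.7, 0.6])
```

An event-based model instead predicts the (onset, offset, class) tuples directly, so event-level quality is optimized during training rather than recovered afterwards.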
As a common weather phenomenon, rain streaks adversely degrade image quality. Hence, removing rain from an image has become an important issue in the field. To handle such an ill-posed single image deraining task, in this paper, we specifically build a novel deep architecture, called rain convolutional dictionary network (RCDNet), which embeds the intrinsic priors of rain streaks and has clear interpretability. Specifically, we first establish a RCD model for representing rain streaks and utilize the proximal gradient descent technique to design an iterative algorithm only containing simple operators for solving the model. By unfolding it, we then build the RCDNet in which every network module has clear physical meanings and corresponds to each operation involved in the algorithm. This good interpretability greatly facilitates an easy visualization and analysis on what happens inside the network and why it works well in inference process. Moreover, taking into account the domain gap issue in real scenarios, we further design a novel dynamic RCDNet, where the rain kernels can be dynamically inferred corresponding to input rainy images and then help shrink the space for rain layer estimation with few rain maps so as to ensure a fine generalization performance in the inconsistent scenarios of rain types between training and testing data. By end-to-end training such an interpretable network, all involved rain kernels and proximal operators can be automatically extracted, faithfully characterizing the features of both rain and clean background layers, and thus naturally lead to better deraining performance. Comprehensive experiments substantiate the superiority of our method, especially on its strong generality to diverse testing scenarios and good interpretability for all its modules. Code is available in \emph{\url{https://github.com/hongwang01/DRCDNet}}.
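The "simple operators" pattern of proximal gradient descent, which the unfolding maps onto network modules, is a gradient step followed by a proximal map. A minimal sketch with an L1 prior, whose proximal operator is soft-thresholding (an illustrative stand-in for the learned proximal operators in the network):

```python
def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1, applied elementwise."""
    return [max(abs(x) - tau, 0.0) * (1.0 if x >= 0 else -1.0) for x in v]

def ista_step(m, grad, step, tau):
    """One proximal gradient iteration on coefficients m: take a
    gradient step on the data-fit term, then apply the proximal map of
    the prior -- the two simple operators the algorithm alternates."""
    return soft_threshold([mi - step * gi for mi, gi in zip(m, grad)],
                          step * tau)

# With zero gradient, one step just shrinks small coefficients to zero.
out = ista_step([1.0, -0.2, 0.05], [0.0, 0.0, 0.0], 0.5, 0.2)
```

Unfolding replaces the hand-chosen prior (and hence the proximal map) with a learnable module while keeping this per-iteration structure, which is where the interpretability comes from.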
Due to their ability to offer more comprehensive information than data from a single view, multi-view (multi-source, multi-modal, multi-perspective, etc.) data are being used more frequently in remote sensing tasks. However, as the number of views grows, the issue of data quality becomes more apparent, limiting the potential benefits of multi-view data. Although recent deep neural network (DNN) based models can learn the weight of data adaptively, a lack of research on explicitly quantifying the data quality of each view when fusing them renders these models inexplicable, performing unsatisfactorily and inflexible in downstream remote sensing tasks. To fill this gap, in this paper, evidential deep learning is introduced to the task of aerial-ground dual-view remote sensing scene classification to model the credibility of each view. Specifically, the theory of evidence is used to calculate an uncertainty value which describes the decision-making risk of each view. Based on this uncertainty, a novel decision-level fusion strategy is proposed to ensure that the view with lower risk obtains more weight, making the classification more credible. On two well-known, publicly available datasets of aerial-ground dual-view remote sensing images, the proposed approach achieves state-of-the-art results, demonstrating its effectiveness. The code and datasets of this article are available at the following address: https://github.com/gaopiaoliang/Evidential.
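In the subjective-logic formulation used by evidential deep learning, a view's evidence vector parameterizes a Dirichlet distribution, and the uncertainty falls as total evidence grows. The uncertainty computation below follows that standard formulation; the 1 - u weighting in the fusion is an illustrative choice, not the paper's exact decision-level rule:

```python
def view_uncertainty(evidence):
    """Subjective-logic uncertainty of one view: with Dirichlet
    parameters alpha_k = e_k + 1 and S = sum(alpha), the belief masses
    are b_k = e_k / S and the uncertainty is u = K / S."""
    K = len(evidence)
    S = sum(e + 1.0 for e in evidence)
    beliefs = [e / S for e in evidence]
    return beliefs, K / S

def fuse(views):
    """Illustrative decision-level fusion: weight each view's beliefs
    by 1 - u so the lower-risk view contributes more."""
    K = len(views[0])
    scores = [0.0] * K
    for ev in views:
        b, u = view_uncertainty(ev)
        for k in range(K):
            scores[k] += (1.0 - u) * b[k]
    return scores

# A confident aerial view dominates an uncertain ground view.
scores = fuse([[10.0, 0.0], [0.5, 1.0]])
```

Because the uncertainty is an explicit quantity per view, the fusion decision can be inspected and justified, which is the interpretability argument made above.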
Existing federated classification algorithms typically assume the local annotations at every client cover the same set of classes. In this paper, we aim to lift such an assumption and focus on a more general yet practical non-IID setting where every client can work on non-identical and even disjoint sets of classes (i.e., client-exclusive classes), and the clients have a common goal which is to build a global classification model to identify the union of these classes. Such heterogeneity in client class sets poses a new challenge: how to ensure different clients are operating in the same latent space so as to avoid the drift after aggregation? We observe that the classes can be described in natural languages (i.e., class names) and these names are typically safe to share with all parties. Thus, we formulate the classification problem as a matching process between data representations and class representations and break the classification model into a data encoder and a label encoder. We leverage the natural-language class names as the common ground to anchor the class representations in the label encoder. In each iteration, the label encoder updates the class representations and regulates the data representations through matching. We further use the updated class representations at each round to annotate data samples for locally-unaware classes according to similarity and distill knowledge to local models. Extensive experiments on four real-world datasets show that the proposed method can outperform various classical and state-of-the-art federated learning methods designed for learning with non-IID data.
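The matching formulation reduces classification to a similarity search between a data representation and the class representations produced by the label encoder. A minimal sketch with cosine similarity; the two-dimensional vectors below are toy stand-ins for real encoder outputs:

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return sum(a * b for a, b in zip(u, v)) / (nu * nv)

def match(data_repr, class_reprs):
    """Classification as matching: score the data representation
    against every class representation and return the best class."""
    scores = {name: cosine(data_repr, v) for name, v in class_reprs.items()}
    return max(scores, key=scores.get)

# Toy label-encoder outputs keyed by natural-language class names.
class_reprs = {"cat": [1.0, 0.0], "dog": [0.0, 1.0]}
```

Because every client scores against the same name-anchored class representations, their data encoders are pulled toward a shared latent space even when their local class sets are disjoint.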